
    Toward a model of computational attention based on expressive behavior: applications to cultural heritage scenarios

    Our project goals were the development of attention-based analysis of human expressive behavior and the implementation of real-time algorithms in EyesWeb XMI, in order to improve the naturalness of human-computer interaction and context-based monitoring of human behavior. To this aim, a perceptual model that mimics human attentional processes was developed for expressivity analysis and modeled by entropy. Museum scenarios were selected as an ecological test-bed for three experiments focusing on visitor profiling and visitor flow regulation.
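    As a rough illustration of the entropy-based modelling above, the sketch below scores windows of a movement-feature stream by the Shannon entropy of their quantized values; the feature, window size, and bin count are illustrative assumptions, not the project's actual EyesWeb XMI pipeline.

        import numpy as np

        def entropy_of_feature(values, n_bins=16):
            # Shannon entropy of a quantized stream of expressive-feature
            # samples; higher entropy ~ more varied, potentially more
            # attention-grabbing behaviour.
            hist, _ = np.histogram(values, bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]  # drop empty bins to avoid log(0)
            return -np.sum(p * np.log2(p))

        # Hypothetical usage: windowed entropy over a stand-in motion stream.
        motion = np.abs(np.random.randn(1000))
        window = 100
        scores = [entropy_of_feature(motion[i:i + window])
                  for i in range(0, len(motion) - window + 1, window)]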

    Segmentation into individual characters in natural scene images

    With the rise of miniature devices fitted with low-resolution cameras, sometimes little more than gadgets, new challenges have appeared in the analysis of natural scene images. The text present in these images is poorly recognised by current OCR software because of its many degradations. In this article we detail a method for segmenting touching characters in photographic images. The main goal is to segment these characters into individual components in order to ease and improve their recognition. First results of our method, evaluated on a public database, are also presented.
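    The abstract does not detail the segmentation method itself; as a hedged classical baseline for the same task, the sketch below proposes cut columns between touching characters at local minima of the vertical ink-projection profile of a binarized word image. The min_char_width spacing constraint is an assumption.

        import numpy as np

        def candidate_cuts(binary_word, min_char_width=8):
            # binary_word: 2-D array, text pixels = 1, background = 0.
            # Candidate cut columns are local minima of the per-column
            # ink count, kept at least min_char_width apart.
            profile = binary_word.sum(axis=0).astype(float)
            cuts = []
            for x in range(1, len(profile) - 1):
                if profile[x] <= profile[x - 1] and profile[x] <= profile[x + 1]:
                    if not cuts or x - cuts[-1] >= min_char_width:
                        cuts.append(x)
            return cuts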

    Memorability of natural scenes: the role of attention

    Image memorability is the capacity of an image to be recalled after a period of time. Recently, the memorability of an image database was measured and some of the factors responsible for it were highlighted. In this paper, we investigate the role of visual attention in image memorability along two axes. The first is experimental and uses the results of eye-tracking on a set of images with different memorability scores. The second is predictive: we show that attention-related features can advantageously replace low-level features in image memorability prediction. Our work indicates that the role of visual attention is important and should be taken into account more, along with other low-level features.
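    A minimal sketch of the predictive axis, assuming a saliency map is available per image: a few attention-related statistics are derived from each map and a simple regressor is fitted to memorability scores. The feature set, the Ridge regressor, and the random stand-in data are illustrative assumptions, not the paper's actual setup.

        import numpy as np
        from sklearn.linear_model import Ridge

        def attention_features(saliency_map):
            # Stand-in attention-related descriptors: overall salience,
            # strongest peak, and spatial entropy of the salience mass.
            s = saliency_map / (saliency_map.sum() + 1e-9)
            nz = s[s > 0]
            return np.array([saliency_map.mean(),
                             saliency_map.max(),
                             -(nz * np.log(nz)).sum()])

        # Hypothetical data: one saliency map and one score per image.
        maps = [np.random.rand(64, 64) for _ in range(100)]
        memorability = np.random.rand(100)
        X = np.stack([attention_features(m) for m in maps])
        model = Ridge().fit(X, memorability)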

    A Three-Level Computational Attention Model

    This article deals with a biologically motivated three-level computational attention model architecture based on rarity and the information-theory framework. It mainly focuses on a low-level step, which aims at quickly highlighting important areas, and a middle-level step, which analyses the behaviour of the detected areas. Their application to both still images and videos provides results to be used by the third, high-level step.
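    A minimal sketch of a rarity-based low-level step, assuming rarity is taken as the self-information of quantized pixel intensities; the model described in the article may rely on richer features, so this is illustrative only.

        import numpy as np

        def rarity_map(gray, n_bins=32):
            # Pixels whose quantized intensity is rare in the image get a
            # high score: -log2 of the probability of their intensity bin.
            bins = np.linspace(gray.min(), gray.max() + 1e-9, n_bins + 1)
            idx = np.digitize(gray, bins) - 1
            p = np.bincount(idx.ravel(), minlength=n_bins) / idx.size
            return -np.log2(p[idx] + 1e-12)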

    One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

    Introducing sparsity in a neural network has been an efficient way to reduce its complexity while keeping its performance almost intact. Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance. The last two steps are often performed iteratively, leading to reasonable results but also to a time-consuming and complex process. In our work, we propose to get rid of the first step of the pipeline and to combine the two other steps in a single pruning-training cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One-Cycle Pruning, which prunes from the beginning of training until its very end. Adopting such a schedule not only leads to better-performing pruned models but also drastically reduces the training budget required to prune a model. Experiments are conducted on a variety of architectures (VGG-16 and ResNet-18) and datasets (CIFAR-10, CIFAR-100 and Caltech-101), and for relatively high sparsity values (80%, 90%, 95% of weights removed). Our results show that One-Cycle Pruning consistently outperforms commonly used pruning schedules such as One-Shot Pruning, Iterative Pruning and Automated Gradual Pruning on a fixed training budget. Comment: Accepted at Sparsity in Neural Networks (SNN 2021).
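    A hedged sketch of the single pruning-training cycle: sparsity is scheduled from step 0 to the last training step and a global magnitude mask is applied as training proceeds. The cubic ramp below is the Automated Gradual Pruning curve used as a stand-in; the exact shape of the One-Cycle schedule is given in the paper.

        import torch

        def sparsity_at(step, total_steps, final_sparsity, power=3):
            # Stand-in schedule: 0 at step 0, final_sparsity at the end.
            t = min(step / total_steps, 1.0)
            return final_sparsity * (1.0 - (1.0 - t) ** power)

        def apply_magnitude_mask(model, sparsity):
            # Global unstructured magnitude pruning: zero the smallest
            # fraction of all weight entries (biases left untouched).
            weights = torch.cat([p.detach().abs().flatten()
                                 for p in model.parameters() if p.dim() > 1])
            k = int(sparsity * weights.numel())
            if k == 0:
                return
            threshold = torch.kthvalue(weights, k).values
            with torch.no_grad():
                for p in model.parameters():
                    if p.dim() > 1:
                        p.mul_((p.abs() > threshold).float())

    In a real loop the mask would be recomputed and re-applied after every optimizer step, e.g. apply_magnitude_mask(model, sparsity_at(step, total_steps, 0.9)).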

    Computational Attention for Defect Localisation

    This article deals with a biologically motivated three-level computational attention model architecture based on rarity and the information-theory framework. It mainly focuses on a low-level step and its application to pre-attentive defect localisation for apple quality grading and tumour localisation in medical images.

    Computational Attention for Event Detection

    This article deals with a biologically motivated three-level computational attention model architecture based on rarity and the information-theory framework. It mainly focuses on the low-level and medium-level steps and their application to the pre-attentive detection of tumours in CT scans and of unusual events in audio recordings.
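    For the audio application, a minimal sketch under the same rarity principle, assuming a scalar per-frame feature such as short-time energy (an assumption; the article may use different descriptors): frames whose quantized feature value is rare across the recording are flagged as candidate unusual events, with an arbitrary surprise threshold in bits.

        import numpy as np

        def rare_frames(feature_per_frame, n_bins=32, threshold_bits=6.0):
            # Frames whose quantized feature value is rare in the whole
            # recording receive high self-information (-log2 p) and are
            # flagged as candidate unusual events.
            f = np.asarray(feature_per_frame, dtype=float)
            bins = np.linspace(f.min(), f.max() + 1e-9, n_bins + 1)
            idx = np.digitize(f, bins) - 1
            p = np.bincount(idx, minlength=n_bins) / idx.size
            surprise = -np.log2(p[idx] + 1e-12)
            return np.where(surprise > threshold_bits)[0]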